# Low-resource Language Model
| Model | Author | License | Category | Downloads | Likes | Description |
|---|---|---|---|---|---|---|
| TwiBERT | sakrah | MIT | Large Language Model (Transformers, Other) | 16 | 3 | A pre-trained language model designed specifically for the Twi language, widely spoken in Ghana and West Africa. |
| MyanBERTa | UCSYNLP | Apache-2.0 | Large Language Model (Transformers, Other) | 91 | 4 | A Burmese pre-trained language model based on the BERT architecture, pre-trained on a Burmese dataset of 5,992,299 sentences. |
| Wav2vec2 Xlsr Nepali | shishirAI | Apache-2.0 | Speech Recognition | 22 | 2 | A Nepali speech recognition model fine-tuned from facebook/wav2vec2-large-xlsr-53. |
| Electra Tagalog Small Uncased Generator | jcblaise | GPL-3.0 | Large Language Model (Transformers, Other) | 18 | 0 | An ELECTRA model designed specifically for Filipino, used to generate synthetic text and pre-train discriminators. |
| Gpt2 Small Arabic Poetry | akhooli | | Large Language Model (Arabic) | 33 | 5 | An Arabic poetry generation model fine-tuned from gpt2-small-arabic on 40,000 Arabic poems from different periods. |
| Sundanese Roberta Base | w11wo | MIT | Large Language Model (Other) | 32 | 2 | A Sundanese masked language model based on the RoBERTa architecture, trained on multiple datasets. |
| Indo Roberta Small | w11wo | MIT | Large Language Model (Other) | 50 | 1 | An Indonesian small masked language model based on the RoBERTa architecture, suitable for text infilling and feature extraction tasks. |
| Gpt2 Turkish Cased | redrussianarmy | | Large Language Model (Other) | 1,060 | 14 | A GPT-2 model trained on Turkish text, serving as a starting point for text generation tasks. |
| Bangla Bert | Kowsher | | Large Language Model (Transformers, Other) | 17 | 4 | A Bengali language model pre-trained on the BERT architecture, supporting masked language modeling tasks. |